We study the problem of combining neural networks with symbolic reasoning. Recently introduced frameworks for Probabilistic Neurosymbolic Learning (PNL), such as DeepProbLog, perform exponential-time exact inference, limiting the scalability of PNL solutions. We introduce Approximate Neurosymbolic Inference (A-NeSI): a new framework for PNL that uses neural networks for scalable approximate inference. A-NeSI 1) performs approximate inference in polynomial time without changing the semantics of probabilistic logics; 2) is trained using data generated by the background knowledge; 3) can generate symbolic explanations of predictions; and 4) can guarantee the satisfaction of logical constraints at test time, which is vital in safety-critical applications. Our experiments show that A-NeSI is the first end-to-end method to scale the Multi-digit MNISTAdd benchmark to sums of 15 MNIST digits, up from 4 in competing systems. Finally, our experiments show that A-NeSI achieves explainability and safety without a penalty in performance.
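To make the scalability problem concrete: exact inference in a probabilistic-logic task like MNISTAdd enumerates every possible world. The sketch below (a toy illustration of the bottleneck, not A-NeSI itself) computes the exact distribution over the sum of uncertain digits by brute-force enumeration, which visits 10^n worlds for n digits.

```python
import itertools
import numpy as np

def exact_sum_distribution(digit_probs):
    """Exact inference for an MNISTAdd-style task: given per-digit
    categorical distributions (one row of 10 probabilities per digit),
    compute the distribution over the sum by enumerating every possible
    world.  The number of worlds is 10**n_digits, hence exponential time
    in the number of digits."""
    n_digits = len(digit_probs)
    sum_probs = np.zeros(9 * n_digits + 1)
    for world in itertools.product(range(10), repeat=n_digits):
        p = np.prod([digit_probs[i][d] for i, d in enumerate(world)])
        sum_probs[sum(world)] += p
    return sum_probs

# Two fully uncertain digits: enumeration already visits 10**2 = 100 worlds.
probs = np.full((2, 10), 0.1)  # uniform beliefs over each digit
dist = exact_sum_distribution(probs)
print(dist.sum())  # a valid distribution: sums to 1
```

A-NeSI's contribution is to replace this enumeration with a learned neural surrogate evaluated in a single polynomial-time forward pass.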
Language models (LMs) have proven useful in various downstream applications, such as summarization, translation, question answering, and text classification. Due to the vast amount of information they can store, LMs are becoming an increasingly important tool in artificial intelligence. In this work, we present PROP (Prompting as Probing), which utilizes GPT-3, a large language model originally proposed by OpenAI in 2020, to perform the task of Knowledge Base Construction (KBC). PROP implements a multi-step approach that combines a variety of prompting techniques to achieve this goal. Our results show that manual prompt curation is essential, that the LM must be encouraged to give answer sets of variable length, in particular including empty answer sets, and that True/False questions are a useful device for increasing the precision of the LM's generated suggestions. The size of the LM is a crucial factor, and dictionary lookup of entity aliases improves LM scoring. Our evaluation study indicates that these proposed techniques can substantially improve the quality of the final predictions: PROP won Track 2 of the LM-KBC competition, outperforming the baseline by 36.4 percentage points. Our implementation is available at https://github.com/hemile/iswc-challenge.
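The abstract names two prompting devices: variable-length (possibly empty) few-shot answer sets, and True/False verification questions. The sketch below assembles such prompts as plain strings; the templates are illustrative stand-ins, not the exact prompts used by PROP, and no LM is actually queried.

```python
def build_probe_prompt(subject, relation, examples):
    """Assemble a few-shot prompt for knowledge base construction.
    `examples` are (subject, answer_list) pairs; including a pair with an
    empty answer list teaches the model that empty answer sets are valid."""
    lines = []
    for ex_subject, answers in examples:
        lines.append(f"Q: {relation} of {ex_subject}?")
        lines.append(f"A: {', '.join(answers) if answers else 'None'}")
    lines.append(f"Q: {relation} of {subject}?")
    lines.append("A:")
    return "\n".join(lines)

def build_tf_prompt(subject, relation, candidate):
    """True/False verification prompt, used to filter low-precision
    candidates proposed by the generation step."""
    return (f"Is '{candidate}' a {relation} of {subject}? "
            f"Answer True or False.\nAnswer:")

prompt = build_probe_prompt(
    "France", "official language",
    [("Germany", ["German"]), ("Antarctica", [])])
print(prompt)
```

Each candidate produced from the first prompt would then be re-scored with the second, keeping only candidates the LM affirms.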
Recent work has shown that logical background knowledge can be used in learning systems to compensate for a lack of labeled training data. Many such methods work by creating a loss function that encodes this knowledge. However, the logic is often discarded after training, even though it remains useful at test time. Instead, we ensure that neural network predictions satisfy the knowledge by refining the predictions with an extra computation step. We introduce differentiable refinement functions that find a corrected prediction close to the original prediction. We study how to compute these refinement functions effectively and efficiently. Using a new algorithm, we combine refinement functions to find refined predictions for logical formulas of any complexity. This algorithm finds optimal refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent cannot.
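A minimal worked example of a refinement function, for a single implication constraint a → b read fuzzily as P(a) ≤ P(b) (a toy instance chosen by us, not taken from the paper): the corrected prediction closest to the original in L2 distance has a closed form.

```python
def refine_implication(p_a, p_b):
    """Refinement function for the constraint a -> b, i.e. P(a) <= P(b).
    If the constraint is violated, the closest correction in L2 distance
    lies on the boundary P(a) = P(b); minimizing (t - p_a)^2 + (t - p_b)^2
    over t gives the average of the two probabilities."""
    if p_a <= p_b:
        return p_a, p_b  # already satisfies the constraint
    m = (p_a + p_b) / 2
    return m, m

print(refine_implication(0.9, 0.4))  # both move to 0.65
print(refine_implication(0.2, 0.7))  # unchanged: constraint already holds
```

Running such a correction after the network's forward pass is what guarantees constraint satisfaction at test time, independently of how the network was trained.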
Score-based generative modeling (SGM) has proven to be a very effective method for modeling densities on finite-dimensional spaces. In this work, we propose to extend this methodology to learn generative models over function spaces. To do so, we represent functional data in spectral space, which dissociates the stochastic part of the process from its space-time part. Using dimensionality reduction techniques, we then sample from its stochastic component with finite-dimensional SGM. We demonstrate the efficacy of our method for modeling various multimodal datasets.
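One simple way to picture the spectral-representation step (our own illustration; the paper's actual basis and reduction may differ) is truncating a Fourier expansion: sampled functions become short vectors of leading coefficients, giving the finite-dimensional space the SGM operates in.

```python
import numpy as np

def to_spectral(f_samples, k):
    """Encode a sampled function as its first k Fourier coefficients:
    a finite-dimensional representation of functional data."""
    return np.fft.rfft(f_samples)[:k]

def from_spectral(coeffs, n):
    """Decode truncated coefficients back to n function samples."""
    full = np.zeros(n // 2 + 1, dtype=complex)
    full[:len(coeffs)] = coeffs
    return np.fft.irfft(full, n=n)

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
f = np.sin(x) + 0.5 * np.cos(3 * x)
recon = from_spectral(to_spectral(f, 8), 128)
print(np.max(np.abs(f - recon)))  # tiny: f is band-limited below k=8
```

A generative model trained on such coefficient vectors can then produce new functions by decoding its samples.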
Score-based generative models exhibit state-of-the-art performance on density estimation and generative modeling tasks. These models typically assume that the data geometry is flat, but recent extensions have been developed to synthesize data living on Riemannian manifolds. Existing methods for accelerating the sampling of diffusion models are typically not applicable in the Riemannian setting, and Riemannian score-based methods have not yet been adapted to the important task of dataset interpolation. To overcome these issues, we introduce \emph{Riemannian Diffusion Schr\"odinger Bridges}. Our proposed method generalizes the Diffusion Schr\"odinger Bridge introduced in \cite{debortoli2021neurips} to the non-Euclidean setting and extends Riemannian score-based models beyond the first time reversal. We validate our proposed method on synthetic data as well as real Earth and climate data.
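For readers unfamiliar with the object being generalized: the Schrödinger bridge is standardly formulated as an entropic projection of a reference diffusion onto prescribed endpoint marginals (the notation below is ours, not the cited paper's).

```latex
% Among path measures \Pi with the prescribed endpoint marginals, the
% Schrödinger bridge is the one closest in KL divergence to the
% reference diffusion P:
\Pi^{\star} \;=\; \operatorname*{arg\,min}_{\Pi}\; \mathrm{KL}(\Pi \,\|\, P)
\quad \text{subject to} \quad \Pi_{0} = \mu_{0}, \qquad \Pi_{T} = \mu_{T}.
```

The Riemannian variant poses the same problem with the reference diffusion living on a manifold rather than Euclidean space.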
Morphological neural networks make it possible to learn the weights of a structuring function given the desired output image. However, those networks are not intrinsically robust to lighting variations in images with an optical cause, such as a change of light intensity. In this paper, we introduce a morphological neural network which possesses such a robustness to lighting variations. It is based on the recent framework of Logarithmic Mathematical Morphology (LMM), i.e. Mathematical Morphology defined with the Logarithmic Image Processing (LIP) model. This model has a LIP additive law which simulates in images a variation of the light intensity. In particular, we learn the structuring function of an LMM operator robust to those variations, namely the map of LIP-additive Asplund distances. Results on images show that our neural network verifies the required property.
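The LIP additive law mentioned above has a simple closed form: for grey tones f and g with maximum grey level M, the LIP sum is f + g − fg/M. The sketch below (constants and variable names are ours) applies it with a constant, which models a global change of light intensity across the image.

```python
import numpy as np

M = 256.0  # maximum grey level of the LIP model

def lip_add(f, c):
    """LIP additive law: f (+) c = f + c - f*c/M.  Adding a constant
    grey tone c models a global variation of the light intensity, and
    the result stays within the LIP grey-tone range."""
    return f + c - f * c / M

img = np.array([[10.0, 100.0], [200.0, 50.0]])
shifted = lip_add(img, 64.0)
print(shifted)
```

A network robust in the LMM sense should map `img` and `lip_add(img, c)` to outputs related by the same law.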
Data augmentation is an important component both in the robustness evaluation of natural language processing (NLP) models and in enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework which supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards and robustness analysis results are publicly available on the NL-Augmenter repository (\url{https://github.com/gem-benchmark/nl-augmenter}).
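The transformation/filter split can be pictured with two tiny classes. These are simplified stand-ins written for illustration, not the actual NL-Augmenter interfaces (see the repository for the real ones).

```python
import random

class WordSwapTransformation:
    """A transformation modifies the data: here, swap two adjacent words
    to perturb word order while keeping the vocabulary intact."""
    def generate(self, sentence, seed=0):
        words = sentence.split()
        if len(words) < 2:
            return [sentence]
        rng = random.Random(seed)
        i = rng.randrange(len(words) - 1)
        words[i], words[i + 1] = words[i + 1], words[i]
        return [" ".join(words)]

class LengthFilter:
    """A filter splits the data according to a feature: here, sentence
    length, e.g. to evaluate a model on short inputs only."""
    def __init__(self, max_words):
        self.max_words = max_words

    def keep(self, sentence):
        return len(sentence.split()) <= self.max_words

out = WordSwapTransformation().generate("the cat sat down", seed=1)
print(out)
print(LengthFilter(3).keep("too many words here"))  # False
```

Robustness evaluation then amounts to comparing model accuracy on the original data against accuracy on the transformed or filtered splits.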
Given the nature of unlabeled data, it is common for partially labeled training datasets to contain samples that belong to novel categories. Although these so-called observed novel categories are present in the training data, they do not belong to any of the training labels. In contrast, open-set recognition defines novel categories as categories that are unobserved during training but present during testing. This research is the first to generalize between observed and unobserved novel categories within a new learning policy: open-set learning with augmented categories by exploiting unlabeled data, or Open-LACU. This study conducts a high-level review of novelty detection to distinguish the research fields that involve observed novel categories from those that involve unobserved novel categories. Open-LACU is then introduced as a synthesis of the related fields, maintaining the advantages of each field within a single learning policy. We are currently finalizing the first Open-LACU network, which will be released in conjunction with this preprint.
Modeling lies at the core of both the financial and the insurance industry for a wide variety of tasks. The rise and development of machine learning and deep learning models have created many opportunities to improve our modeling toolbox. Breakthroughs in these fields often come with the requirement of large amounts of data. Such large datasets are often not publicly available in finance and insurance, mainly due to privacy and ethics concerns. This lack of data is currently one of the main hurdles in developing better models. One possible option to alleviate this issue is generative modeling. Generative models are capable of simulating fake but realistic-looking data, also referred to as synthetic data, that can be shared more freely. Generative Adversarial Networks (GANs) are one such model that increases our capacity to fit very high-dimensional distributions of data. While research on GANs is an active topic in fields like computer vision, they have found limited adoption within the human sciences, like economics and insurance. The reason for this is that in these fields, most questions are inherently about the identification of causal effects, while to this day neural networks, which are at the center of the GAN framework, focus mostly on high-dimensional correlations. In this paper we study the causal preservation capabilities of GANs and whether the produced synthetic data can reliably be used to answer causal questions. This is done by performing causal analyses on the synthetic data, produced by a GAN, under increasingly lenient assumptions. We consider the cross-sectional case, the time series case and the case with a complete structural model. It is shown that in the simple cross-sectional scenario where correlation equals causation the GAN preserves causality, but that challenges arise for more advanced analyses.
We present the interpretable meta neural ordinary differential equation (iMODE) method to rapidly learn generalizable (i.e., not parameter-specific) dynamics from trajectories of multiple dynamical systems that vary in their physical parameters. The iMODE method learns meta-knowledge, the functional variations of the force field of dynamical system instances without knowing the physical parameters, by adopting a bi-level optimization framework: an outer level capturing the common force field form among studied dynamical system instances and an inner level adapting to individual system instances. A priori physical knowledge can be conveniently embedded in the neural network architecture as inductive bias, such as conservative force field and Euclidean symmetry. With the learned meta-knowledge, iMODE can model an unseen system within seconds, and inversely reveal knowledge on the physical parameters of a system, or as a Neural Gauge to "measure" the physical parameters of an unseen system with observed trajectories. We test the validity of the iMODE method on bistable, double pendulum, Van der Pol, Slinky, and reaction-diffusion systems.
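A toy analogue of the bi-level idea (a deliberately simplified stand-in for iMODE's neural scheme, with all names and numbers chosen by us): every system shares the force-field *form* F(x) = −k·x (outer level), while each instance adapts its own stiffness k (inner level) from a few observed (x, F) pairs, which is also how "measuring" an unseen system's physical parameter works.

```python
import numpy as np

def adapt_stiffness(xs, forces):
    """Inner level: per-system least-squares fit of k in F = -k * x.
    Minimizing sum_i (F_i + k * x_i)^2 gives k = -(x . F) / (x . x).
    The outer level (learning the shared form itself) is omitted here."""
    xs, forces = np.asarray(xs), np.asarray(forces)
    return -(xs @ forces) / (xs @ xs)

# An unseen spring system with unknown stiffness, observed at 3 points.
true_k = 3.0
xs = np.array([0.5, 1.0, 2.0])
forces = -true_k * xs
print(adapt_stiffness(xs, forces))  # recovers k = 3.0
```

In iMODE the shared form is a neural network and the adaptation is gradient-based, but the division of labor between the two levels is the same.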